Reasoning Models Ace the CFA Exams
Patel, Jaisal, Chen, Yunzhe, He, Kaiwen, Wang, Keyi, Li, David, Xiao, Kairong, Liu, Xiao-Yang
Previous research has reported that large language models (LLMs) demonstrate poor performance on the Chartered Financial Analyst (CFA) exams. However, recent reasoning models have achieved strong results on graduate-level academic and professional examinations across various disciplines. In this paper, we evaluate state-of-the-art reasoning models on a set of mock CFA exams consisting of 980 questions across three Level I exams, two Level II exams, and three Level III exams. Using the same pass/fail criteria from prior studies, we find that most models clear all three levels. The models that pass, ordered by overall performance, are Gemini 3.0 Pro, Gemini 2.5 Pro, GPT-5, Grok 4, Claude Opus 4.1, and DeepSeek-V3.1. Specifically, Gemini 3.0 Pro achieves a record score of 97.6% on Level I. Performance is also strong on Level II, led by GPT-5 at 94.3%. On Level III, Gemini 2.5 Pro attains the highest score with 86.4% on multiple-choice questions while Gemini 3.0 Pro achieves 92.0% on constructed-response questions.
THaLLE: Text Hyperlocally Augmented Large Language Extension -- Technical Report
Labs, KBTG, Khamnuansin, Danupat, Petchsod, Atthakorn, Lertpiya, Anuruth, Balee, Pornchanan, Lodkaew, Thanawat, Chalothorn, Tawunrat, Pongthawornkamol, Thadpong, Lertsutthiwong, Monchai
Large Language Models (LLMs) have emerged as leading tools in Natural Language Processing (NLP) due to their exceptional performance across various tasks. The advent of open-source models such as Llama [1] from Meta, Gemma [2] from Google, and Qwen [3] from Alibaba has significantly enhanced public access to advanced LLMs. Additionally, low-cost techniques for LLM fine-tuning, such as Low-Rank Adaptation (LoRA) [4], have enabled the fine-tuning of these models on consumer-grade hardware, thereby accelerating their development and adoption. LLMs are now utilized in a wide array of applications, ranging from personal assistants, e.g., ChatGPT, to specialized tasks in diverse domains. In the financial sector, BloombergGPT [5], a proprietary LLM trained from the ground up with an infusion of financial data, has demonstrated superior performance on financial benchmarks compared to other models in the market.
Can GPT models be Financial Analysts? An Evaluation of ChatGPT and GPT-4 on mock CFA Exams
Callanan, Ethan, Mbakwe, Amarachi, Papadimitriou, Antony, Pei, Yulong, Sibue, Mathieu, Zhu, Xiaodan, Ma, Zhiqiang, Liu, Xiaomo, Shah, Sameena
Large Language Models (LLMs) have demonstrated remarkable performance on a wide range of Natural Language Processing (NLP) tasks, often matching or even beating state-of-the-art task-specific models. This study aims to assess the financial reasoning capabilities of LLMs. We leverage mock exam questions of the Chartered Financial Analyst (CFA) Program to conduct a comprehensive evaluation of ChatGPT and GPT-4 in financial analysis, considering Zero-Shot (ZS), Chain-of-Thought (CoT), and Few-Shot (FS) scenarios. We present an in-depth analysis of the models' performance and limitations, and estimate whether they would have a chance at passing the CFA exams. Finally, we outline insights into potential strategies and improvements to enhance the applicability of LLMs in finance. With this in view, we hope this work paves the way for future studies to continue enhancing LLMs for financial reasoning through rigorous evaluation.